quotes/notes on “Large AI models are cultural and social technologies” by Henry Farrell, Alison Gopnik, Cosma Shalizi, James Evans
Large Models should not be viewed primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.
Markets, democracies and bureaucracies have relied on mechanisms that generate lossy (i.e. incomplete, selective and uninvertible) but useful representations since well before the computer.
This idea of lossy representation feels like a glimpse of a kernel I’ve been trying to think towards recently. For me that has mostly been about the lossiness of language as a representation of concepts, and in particular the hazards of “high-concept”, abstract terms, which create a lot of room for slippage in understood meaning between communicator and receiver.
Douglas Hofstadter’s ideas about all communication entailing ‘encoding’ and ‘decoding’ from “raw” internal human experience also come to mind.
But this additional dimension, the way we rely on lossy summaries of the state of our surroundings in order to parse the complexity of organised society, is also superb.
Just as market prices are lossy representations of the underlying allocations and uses of resources, and government statistics and bureaucratic categories imperfectly represent the characteristics of underlying populations, so too large models are ‘lossy JPEGs’ of the data corpora on which they have been trained.
The Unlearning Economics video (which led me to this paper, and which I’ll start writing about soon) describes LLMs as a technology of summarisation, but I like how this article adds that it’s equally a technology of transformation, because I think that captures a lot of its power.
someone asking a bot for help writing a cover letter for a job application is really engaging in a technically mediated relationship with thousands of earlier job applicants and millions of other letter writers, RLHF workers, etc.
this also speaks in a small way to the disorienting sense of indirection and loss of accountability in our systems that the UE video identifies. I think it also chimes with why I’ve found the idea of “computing as if it’s real” and materialist computing to be powerful. We have intentionally been left with no concrete understanding of where the things in our lives are coming from and why they’re happening: this omnipresent mirage is never more keenly felt than when we mystify the outputs of an LLM.
We now have a technology that does for written and pictured culture what large-scale markets do for the economy, what large-scale bureaucracy does for society, and perhaps even something comparable to what print once did for language. What happens next?
The development of cultural technologies leads to a fundamental economic tension between the people who produce information and the systems that distribute it.
They go on to state that the two groups (distributors and producers) have opposing economic incentives, in an argument I’m not totally sure I follow? They say,
The distributors will profit if they can access the producer’s information cheaply, while the producers will profit if they can get their information distributed cheaply.
Oh, are we saying that, e.g., if a publisher can pay a writer very little for their writing and then charge a premium for selling that writing on to an audience, the publisher does well; whereas a writer does well if they can earn a large amount for the writing itself, and therefore needs distribution to be cheap so the overall offering remains attractive to consumers?
The ease and efficiency of distributing information in digital form has already made this problem especially acute.
chiming again with the idea that the infinite replicability of knowledge-work products in the digital age is fundamentally in contradiction with the attempt to treat them as products in a market economy?
Like markets and bureaucracies, they [LMs] will make some kinds of knowledge more visible and tractable than they were in the past, encouraging policymakers to focus on the new things that they can measure and see at the expense of those less visible and more confusing.
[Engineers considering the ethics of LLMs] should go further. How will these systems affect who gets what? What will their practical consequences be for societal polarization and integration? Can they be developed to enhance human creativity rather than to dull it?